107 research outputs found

    Towards infield, live plant phenotyping using a reduced-parameter CNN

    Consumption of agricultural produce is increasing as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of infield deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge which relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on millions of model parameters and generate very large weight matrices, making them difficult to deploy infield on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable infield and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce the model parameter numbers and weight matrices of these very deep CNN-based models. Our combined method reduced the weight matrices by up to 95% without affecting pixel-wise accuracy. These methods have been evaluated on two public plant datasets and one non-plant dataset to illustrate generality. We have successfully tested our models on a mobile device.
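The SVD half of this compression strategy can be sketched in a few lines. The layer shape and rank below are illustrative choices, not values taken from the paper:

```python
import numpy as np

# Hypothetical dense weight matrix, e.g. a CNN's final fully connected layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))

# Truncated SVD: keep only the top-k singular triplets.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 32
W_approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Storing U[:, :k], s[:k] and Vt[:k, :] instead of W cuts the parameter
# count from 512*512 to 512*k + k + k*512 -- roughly an 8x reduction here.
stored = U[:, :k].size + s[:k].size + Vt[:k, :].size
ratio = stored / W.size
```

In a deployed model the two low-rank factors would typically replace the original layer as two smaller consecutive layers, so the full matrix is never reconstructed at inference time.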

    Adaptive tracking via multiple appearance models and multiple linear searches

    We introduce a unified tracker (FMCMC-MM) which adapts to changes in target appearance by combining two popular generative models, templates and histograms, maintaining multiple instances of each in an appearance pool, and enhancing prediction with multiple linear searches. These search directions are sparse estimates of motion direction derived from local features stored in a feature pool. Given only an initial template representation of the target, the proposed tracker can learn appearance changes in a supervised manner and generate appropriate target motions without knowing the target movement in advance. During tracking, it automatically switches between models in response to variations in target appearance, exploiting the strengths of each model component. New models are added, automatically, as necessary. The effectiveness of the approach is demonstrated using a variety of challenging video sequences. Results show that this framework outperforms existing appearance-based tracking frameworks.

    From Codes to Patterns: Designing Interactive Decoration for Tableware

    We explore the idea of making aesthetic decorative patterns that contain multiple visual codes. We chart an iterative collaboration with ceramic designers and a restaurant to refine a recognition technology to work reliably on ceramics, produce a pattern book of designs, and prototype sets of tableware and a mobile app to enhance a dining experience. We document how the designers learned to work with and creatively exploit the technology, enriching their patterns with embellishments and backgrounds and developing strategies for embedding codes into complex designs. We discuss the potential and challenges of interacting with such patterns. We argue for a transition from designing ‘codes’ to designing ‘patterns’ that reflects the skills of designers alongside the development of new technologies.

    Potential of geoelectrical methods to monitor root zone processes and structure: a review

    Understanding the processes that control mass and energy exchanges between soil, plants and the atmosphere is critical to understanding the root zone system, and is also beneficial for practical applications such as sustainable agriculture and geotechnics. Improved process understanding demands fast, minimally invasive and cost-effective methods of monitoring the shallow subsurface. Geoelectrical monitoring methods fulfil these criteria and have therefore become of increasing interest to soil scientists. Such methods are particularly sensitive to variations in soil moisture and the presence of root material, both of which are essential drivers for processes and mechanisms in soil and root zone systems. This review analyses the recent use of geoelectrical methods in the soil sciences, and highlights their main achievements in focal areas such as estimating hydraulic properties and delineating root architecture. We discuss the specific advantages and limitations of geoelectrical monitoring in this context. Standing out amongst the latter are the non-uniqueness of inverse model solutions and the choice of appropriate pedotransfer functions between electrical parameters and soil properties. The relationship between geoelectrical monitoring and alternative characterization methodologies is also examined. Finally, we advocate for future interdisciplinary research combining models of root hydrology and geoelectrical measurements. This includes the development of more appropriate analogue root electrical models, careful separation between different root zone contributors to the electrical response, and the integration of spatial and temporal geophysical measurements into plant hydrological models to improve the prediction of root zone development and hydraulic parameters.
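As a concrete example of the kind of pedotransfer function the review refers to, Archie's law is a classic relation linking bulk electrical resistivity to porosity and water saturation. The parameter values below are illustrative only; in practice the exponents are calibrated per soil:

```python
def archie_saturation(rho, rho_w, phi, m=1.8, n=2.0):
    """Invert Archie's law, rho = rho_w * phi**(-m) * Sw**(-n), for the
    water saturation Sw. The cementation exponent m and saturation
    exponent n are soil-dependent; the defaults are typical textbook values."""
    return (rho_w * phi ** (-m) / rho) ** (1.0 / n)

# Illustrative numbers: pore-water resistivity 20 ohm-m, porosity 0.4,
# and a measured bulk resistivity of 200 ohm-m.
Sw = archie_saturation(rho=200.0, rho_w=20.0, phi=0.4)
```

Relations of this kind are what make the non-uniqueness problem concrete: several combinations of porosity, saturation and pore-water resistivity can produce the same measured bulk resistivity.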

    TRIC-track: tracking by regression with incrementally learned cascades

    This paper proposes a novel approach to part-based tracking that replaces local matching of an appearance model with direct prediction of the displacement between local image patches and part locations. We propose to use cascaded regression with incremental learning to track generic objects without any prior knowledge of an object’s structure or appearance. We exploit the spatial constraints between parts by implicitly learning the shape and deformation parameters of the object in an online fashion. We integrate a multiple temporal scale motion model to initialise our cascaded regression search close to the target and to allow it to cope with occlusions. Experimental results show that our tracker ranks first on the CVPR 2013 Benchmark.
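The core idea of cascaded regression — each stage predicting the remaining displacement from features computed at the current estimate — can be illustrated with a one-dimensional toy. The "feature" below is a noisy stand-in for real image descriptors, not the paper's actual features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: d is the true displacement between a patch and the part
# location; a "feature" is a noisy observation of the current displacement.
def feature(d):
    return d + rng.normal(scale=0.3, size=d.shape)

# Train a cascade: each stage is a scalar linear regressor fitted to
# predict the displacement remaining after the previous stages.
d = rng.uniform(-5, 5, size=1000)   # training displacements
residual = d.copy()
stages = []
for _ in range(4):
    f = feature(residual)
    w = (f @ residual) / (f @ f)    # least-squares gain for this stage
    stages.append(w)
    residual = residual - w * f     # displacement left after this stage

# At test time, applying the stages in order shrinks the error step by step.
test = rng.uniform(-5, 5, size=200)
err = test.copy()
for w in stages:
    err = err - w * feature(err)
```

Each stage only has to correct the (smaller, differently distributed) residual its predecessors leave behind, which is why a cascade of weak regressors can localise far more accurately than any single stage.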

    Automated recovery of 3D models of plant shoots from multiple colour images

    Increased adoption of the systems approach to biological research has focussed attention on the use of quantitative models of biological objects. This includes a need for realistic 3D representations of plant shoots for quantification and modelling. Previous limitations in single or multi-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level set method, optimising the model based on image information, curvature constraints and the position of neighbouring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed, and as such is applicable to a wide variety of plant species and topologies, and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on datasets of wheat and rice plants, as well as a novel virtual dataset that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modelling applications, in a format that can be imported into the majority of 3D graphics and software packages.

    Active Vision and Surface Reconstruction for 3D Plant Shoot Modelling

    Plant phenotyping is the quantitative description of a plant’s physiological, biochemical and anatomical status which can be used in trait selection and helps to provide mechanisms to link underlying genetics with yield. Here, an active vision-based pipeline is presented which aims to contribute to reducing the bottleneck associated with phenotyping of architectural traits. The pipeline provides fully automated photometric data acquisition and recovery of three-dimensional (3D) models of plants without requiring botanical expertise, whilst ensuring a non-intrusive and non-destructive approach. Access to complete and accurate 3D models of plants supports computation of a wide variety of structural measurements. An Active Vision Cell (AVC) consisting of a camera-mounted robot arm plus a combined software interface and a novel surface reconstruction algorithm is proposed. This pipeline provides a robust, flexible and accurate method for automating the 3D reconstruction of plants. The reconstruction algorithm can reduce noise and provides a promising and extendable framework for high-throughput phenotyping, improving current state-of-the-art methods. Furthermore, the pipeline can be applied to any plant species or form due to the application of an active vision framework combined with the automatic selection of key parameters for surface reconstruction.

    Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields

    Background: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments. Results: Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that combining synthetic and original field images to train the developed model improved the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (AP@0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively, with 6.48 ms inference time per image (800 × 1200) on an NVIDIA Titan X GPU environment.
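The anchor-box step can be sketched as follows. YOLO-style anchor clustering conventionally uses 1 − IoU as the k-means distance rather than Euclidean distance; the box dimensions and the deterministic initialisation below are illustrative, not taken from the paper:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (width, height) pairs, assuming boxes share a top-left
    corner -- the convention commonly used for YOLO anchor clustering."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50):
    # Deterministic init: k boxes evenly spaced through the area-sorted list.
    order = np.argsort(boxes[:, 0] * boxes[:, 1])
    init = order[np.linspace(0, len(boxes) - 1, k).astype(int)]
    anchors = boxes[init].astype(float)
    for _ in range(iters):
        # Max IoU = min (1 - IoU) distance.
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors

# Hypothetical (width, height) pairs of ground-truth boxes, in pixels.
boxes = np.array([[30., 60.], [32., 58.], [100., 40.],
                  [96., 44.], [200., 210.], [190., 205.]])
anchors = kmeans_anchors(boxes, k=3)
```

The resulting cluster centres become the network's anchor priors, so predictions only need to regress small offsets from box shapes that actually occur in the training data.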

    The spatial character of sensor technology

    By considering the spatial character of sensor-based interactive systems, this paper investigates how discussions of seams and seamlessness in ubiquitous computing neglect the complex spatial character that is constructed as a side-effect of deploying sensor technology within a space. Through a study of a torch (‘flashlight’) based interface, we develop a framework for analysing this spatial character generated by sensor technology. This framework is then used to analyse and compare a range of other systems in which sensor technology is used, in order to develop a design spectrum that contrasts the revealing and hiding of a system's structure to users. Finally, we discuss the implications for interfaces situated in public spaces and consider the benefits of hiding structure from users.